Low-rank modeling has many important applications in machine learning, computer vision, and social network analysis. While the matrix rank is often approximated by the convex nuclear norm, nonconvex low-rank regularizers have demonstrated better recovery performance. However, the resulting optimization problem is much more challenging. A recent state-of-the-art approach is based on the proximal gradient algorithm, but it requires an expensive full SVD in each proximal step. In this paper, we show that for many commonly used nonconvex low-rank regularizers, a cutoff can be derived to automatically threshold the singular values obtained from the proximal operator. This allows the power method to be used to approximate the SVD efficiently. In addition, the proximal operator can be reduced to that of a much smaller matrix projected onto this leading subspace. Convergence at a rate of O(1/T), where T is the number of iterations, can be guaranteed. Extensive experiments are performed on matrix completion and robust principal component analysis. The proposed method achieves significant speedup over the state-of-the-art. Moreover, the matrix solution obtained is more accurate and has a lower rank than that of the traditional nuclear norm regularizer.
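To make the idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of one approximate proximal step: a block power method estimates the leading left subspace, the proximal operator is then applied only to the small projected matrix, and singular values below a cutoff are dropped. The rank parameter k, the threshold, and the use of soft-thresholding (the nuclear-norm proximal operator) in place of a nonconvex regularizer's operator are all illustrative assumptions.

```python
import numpy as np

def power_method(X, k, n_iter=20, seed=0):
    """Approximate the top-k left singular subspace of X by block power iteration."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((X.shape[1], k))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(X @ Q)      # orthonormalize range(X Q)
        Q, _ = np.linalg.qr(X.T @ Q)    # refine with the transpose
    Q, _ = np.linalg.qr(X @ Q)
    return Q                             # m x k basis spanning (approximately) the leading subspace

def approx_prox_step(X, k, threshold):
    """One approximate proximal step: project onto the leading subspace, then
    threshold the singular values of the much smaller projected matrix."""
    Q = power_method(X, k)                                # m x k orthonormal basis
    R = Q.T @ X                                           # k x n projected matrix (small)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)      # cheap SVD of the small matrix
    s_thr = np.maximum(s - threshold, 0.0)                # illustrative soft-thresholding
    keep = s_thr > 0                                      # singular values below the cutoff are dropped
    return (Q @ U[:, keep]) * s_thr[keep] @ Vt[keep, :]   # low-rank update, never forms a full SVD
```

Because only the k-dimensional projected matrix is decomposed, each step costs roughly a few matrix-vector products plus an SVD of a k x n matrix, rather than a full SVD of the original matrix.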